Substituting Touch for Vision

Authors

  • John Zelek
  • Daniel Asmar
Abstract

We are currently exploring relaying navigational information (e.g., obstacles, terrain, depth) to a visually impaired person using a tactile glove we have developed. The glove consists of a collection of vibrating motors. The collective patterns of motor activity are used for conveying the navigational information, which is mapped from an artificial perception system derived from a wearable camera and computer. The tactile glove has a reduced bandwidth when compared to the visual input stream. Three exploratory routes of tactile mapping include: (1) encoding information in terms of a minimally spanning basis set of spatial prepositions; (2) organizing the hand in terms of functionality (e.g., obstacle motors, terrain motors); and (3) a direct fovea-periphery retinal distinction on the hand. The glove strongly relies on the information provided by the artificial perception system. We have explored a probabilistic framework (e.g., particle filtering) for modelling dynamical visual processes (e.g., tracking, optical flow, depth from stereo). We suspect that a probabilistic encoding is necessary to model the uncertainty in visual processing. In addition, the integration of temporal stream redundancy improves the reliability of the perceived scene. The internal representations developed for this application will also be useful for mobile robot navigation.
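The abstract names particle filtering as the probabilistic framework for modelling dynamical visual processes such as tracking. As a rough illustration only (not the authors' implementation), a minimal 1-D particle filter with a random-walk motion model and Gaussian measurement likelihood could be sketched as follows; the noise parameters and particle count here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         process_noise=0.5, meas_noise=1.0):
    """One predict-update-resample cycle for 1-D position tracking."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, process_noise, size=particles.shape)
    # Update: reweight particles by the Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights, then reset
    # the weights to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a hypothetical target moving at +1 unit per step under noisy
# measurements; the true trajectory is synthetic.
n = 1000
particles = rng.uniform(-10.0, 10.0, n)
weights = np.full(n, 1.0 / n)
true_pos = 0.0
for t in range(20):
    true_pos += 1.0
    z = true_pos + rng.normal(0.0, 1.0)
    particles, weights = particle_filter_step(particles, weights, z)

# The posterior mean serves as the state estimate.
estimate = particles.mean()
```

The posterior over the target state is represented by the particle cloud itself, which is why such filters suit the uncertainty modelling the abstract describes: the spread of the particles directly encodes confidence in the tracked quantity.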


Related articles

Tactile Substitution for Vision

Sensory Substitution (SenSub) is an approach that allows perceiving environmental information that is normally received via one sense (e.g., vision) via another sense (e.g., touch or audition). A typical SenSub system includes three major components: (a) a sensor that senses information typically received by the substituted modality (e.g., visual), (b) a coupling system that can process the sen...


Learning from vision-to-touch is different than learning from touch-to-vision

We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "novel" tasks, in which participants wer...


Processing time of addition or withdrawal of single or combined balance-stabilizing haptic and visual information.

We investigated the integration time of haptic and visual input and their interaction during stance stabilization. Eleven subjects performed four tandem-stance conditions (60 trials each). Vision, touch, and both vision and touch were added and withdrawn. Furthermore, vision was replaced with touch and vice versa. Body sway, tibialis anterior, and peroneus longus activity were measured. Followi...


Postural control in down syndrome: the use of somatosensory and visual information to attenuate body sway.

The purpose of this study was to examine the effects of visual and somatosensory information on body sway in individuals with Down syndrome (DS). Nine adults with DS (19-29 years old) and nine control subjects (CS) (19-29 years old) stood in the upright stance in four experimental conditions: no vision and no touch; vision and no touch; no vision and touch; and vision and touch. In the vision c...


Perception of the material properties of wood based on vision, audition, and touch

Most research on the multimodal perception of material properties has investigated the perception of material properties of two modalities such as vision-touch, vision-audition, audition-touch, and vision-action. Here, we investigated whether the same affective classifications of materials can be found in three different modalities of vision, audition, and touch, using wood as the target object...


An event-related brain potential study of cross-modal links in spatial attention between vision and touch.

Event-related potential (ERP) evidence for the existence of cross-modal links in endogenous spatial attention between vision and touch was obtained in an experiment where participants had to detect tactile or visual targets on the attended side and to ignore the irrelevant modality and stimuli on the unattended side. For visual ERPs, attentional modulations of occipital P1 and N1 components wer...



Journal:

Volume   Issue

Pages  -

Publication date: 2003